About the IODS course

In this course, I will learn how to use R for statistical analysis.

My GitHub repository: https://github.com/jsimola/IODS-project


Week 2 - Regression and model validation

Read the data

rm(list = ls()) # clear workspace first
learning2014 <- read.csv("~/Documents/GitHub/IODS-project/learning2014.csv") # my own data wrangling
# read.table("~/Documents/GitHub/IODS-project/data/learning2014.txt", sep = ",") # Kimmo's data

Explore the structure and dimensions of the dataset

str(learning2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
##  $ Age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ Attitude: int  37 31 25 35 37 38 35 29 38 21 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ Points  : int  25 12 24 10 22 21 21 31 24 26 ...
dim(learning2014)
## [1] 166   7

The data includes 166 observations of 7 variables: gender (a two-level factor: F, M), age, attitude, the mean scores of the deep, strategic and surface learning approaches, and exam points.

library(ggplot2) # Access the ggplot2 library

Show a graphical overview of the data

# show gender distributions as bar graph
p1 <- ggplot(learning2014, aes(gender))
p1 + geom_bar()

# display variable distributions as histogram
p2 <- ggplot(learning2014, aes(Age))
p2 + geom_histogram(binwidth = 5)

p3 <- ggplot(learning2014, aes(Attitude))
p3 + geom_histogram(binwidth = 2)

p4 <- ggplot(learning2014, aes(deep))
p4 + geom_histogram(binwidth = 0.5)

p5 <- ggplot(learning2014, aes(stra))
p5 + geom_histogram(binwidth = 0.5)

p6 <- ggplot(learning2014, aes(surf))
p6 + geom_histogram(binwidth = 0.5)

p7 <- ggplot(learning2014, aes(Points))
p7 + geom_histogram(binwidth = 2)

# show relationships between variables 
p8 <- ggplot(learning2014, aes(x = Attitude, y = Points, col=gender))
p8 + geom_point() + ggtitle("Relationship between exam points and attitude") + geom_smooth(method = "lm")

p9 <- ggplot(learning2014, aes(x = deep, y = Points, col=gender))
p9 + geom_point() + ggtitle("Relationship between exam points and deep learning") + geom_smooth(method = "lm")

p10 <- ggplot(learning2014, aes(x = stra, y = Points, col=gender))
p10 + geom_point() + ggtitle("Relationship between exam points and strategic learning") + geom_smooth(method = "lm")

p11 <- ggplot(learning2014, aes(x = surf, y = Points, col=gender))
p11 + geom_point() + ggtitle("Relationship between exam points and surface learning") + geom_smooth(method = "lm")

p12 <- ggplot(learning2014, aes(x = Age, y = Points, col=gender))
p12 + geom_point() + ggtitle("Relationship between age and exam points") + geom_smooth(method = "lm")

p13 <- ggplot(learning2014, aes(x = Age, y = Attitude, col=gender))
p13 + geom_point() + ggtitle("Relationship between age and attitudes") + geom_smooth(method = "lm")

library(GGally)
pairs(learning2014[-1], col = learning2014$gender)

p <- ggpairs(learning2014, mapping = aes(col=gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))

# draw the plot
p

Summary of the variables

summary(learning2014)
##  gender       Age           Attitude          deep            stra      
##  F:110   Min.   :17.00   Min.   :14.00   Min.   :1.583   Min.   :1.250  
##  M: 56   1st Qu.:21.00   1st Qu.:26.00   1st Qu.:3.333   1st Qu.:2.625  
##          Median :22.00   Median :32.00   Median :3.667   Median :3.188  
##          Mean   :25.51   Mean   :31.43   Mean   :3.680   Mean   :3.121  
##          3rd Qu.:27.00   3rd Qu.:37.00   3rd Qu.:4.083   3rd Qu.:3.625  
##          Max.   :55.00   Max.   :50.00   Max.   :4.917   Max.   :5.000  
##       surf           Points     
##  Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.417   1st Qu.:19.00  
##  Median :2.833   Median :23.00  
##  Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :4.333   Max.   :33.00
# another way of summarising the variables
#library(dplyr)
#learning2014 %>%
#  group_by(gender) %>%
#  summarise(mean = mean(Attitude), n = n())

The sample consists mainly of female participants, and the majority of participants are between 20 and 30 years old. The variables are mostly normally distributed. Attitude appears to predict exam points, whereas the learning-strategy scores do not, and age explains neither exam points nor attitudes.

Fit a regression model to study whether attitude and the strategic and surface learning scores explain exam points. The learning-strategy scores (stra and surf) did not explain exam points significantly, so they were dropped from the model; in the simplified model, attitude explained exam points significantly (p = 4.12e-09).

# fit a linear model
my_model1 <- lm(Points ~ Attitude + stra + surf, data = learning2014) # how to use three explanatory vars
summary(my_model1)
## 
## Call:
## lm(formula = Points ~ Attitude + stra + surf, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 11.01711    3.68375   2.991  0.00322 ** 
## Attitude     0.33952    0.05741   5.913 1.93e-08 ***
## stra         0.85313    0.54159   1.575  0.11716    
## surf        -0.58607    0.80138  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08
my_model2 <- lm(Points ~ Attitude, data = learning2014) 
summary(my_model2)
## 
## Call:
## lm(formula = Points ~ Attitude, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.9763  -3.2119   0.4339   4.1534  10.6645 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 11.63715    1.83035   6.358 1.95e-09 ***
## Attitude     0.35255    0.05674   6.214 4.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared:  0.1906, Adjusted R-squared:  0.1856 
## F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09

Produce the diagnostic plots: Residuals vs Fitted values, Normal QQ-plot and Residuals vs Leverage

plot(my_model2, which = c(1, 2, 5)) # select Residuals vs Fitted, Normal Q-Q and Residuals vs Leverage
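
By default plot() on an lm object draws the diagnostics one at a time; a minimal sketch (base graphics) to show the three requested plots side by side:

par(mfrow = c(1, 3)) # 1 row, 3 columns of plots
plot(my_model2, which = c(1, 2, 5))
par(mfrow = c(1, 1)) # reset the plotting layout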


Week 3 - Logistic regression

Read and explore the data

rm(list = ls())  # clear workspace
alc <- read.csv("~/Documents/GitHub/IODS-project/data/alc.csv") # my own data wrangling
variable.names(alc) 
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "nursery"    "internet"   "guardian"   "traveltime"
## [16] "studytime"  "failures"   "schoolsup"  "famsup"     "paid"      
## [21] "activities" "higher"     "romantic"   "famrel"     "freetime"  
## [26] "goout"      "Dalc"       "Walc"       "health"     "absences"  
## [31] "G1"         "G2"         "G3"         "alc_use"    "high_use"

Student Performance Data Set Information

This data describes student performance in two Portuguese schools. The data attributes include school, student grades, and demographic, social and school-related features, and it was collected by using school reports and questionnaires. The data were combined from two datasets: (1) performance in mathematics and (2) Portuguese language. The grades are means of the Math and Portuguese grades: G1 - first period grade (numeric: from 0 to 20), G2 - second period grade (numeric: from 0 to 20), G3 - final grade (numeric: from 0 to 20, output target).
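
For reference, the alc_use and high_use columns come from my data wrangling step; a sketch of how they are typically defined in this exercise (the exact definitions below are assumptions, not taken from the wrangling script itself):

# assumed wrangling step: average the weekday (Dalc) and weekend (Walc)
# alcohol consumption, and flag averages above 2 as high use
alc$alc_use <- (alc$Dalc + alc$Walc) / 2
alc$high_use <- alc$alc_use > 2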

Hypotheses of the relationship between high/low alcohol consumption and four other variables

  • H1: final grade (G3) is higher for low alcohol consumers
  • H2: high alcohol use is associated with an elevated number of school absences
  • H3: low alcohol use is associated with better health
  • H4: low alcohol use is associated with longer study time per week

The distributions of the chosen variables and their relationships with alcohol consumption

# access the tidyverse libraries tidyr, dplyr, ggplot2
library(tidyr); library(dplyr); library(ggplot2)
## 
## Attaching package: 'dplyr'
## The following object is masked from 'package:GGally':
## 
##     nasa
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
chosen_vars <- c("high_use","G3","absences","health","studytime") # choose relevant vars
chosen_data <- select(alc, one_of(chosen_vars))
#chosen_data$high_useN <- as.numeric(chosen_data$high_use)
gather(chosen_data) %>% glimpse
## Observations: 1,910
## Variables: 2
## $ key   <chr> "high_use", "high_use", "high_use", "high_use", "high_us...
## $ value <int> 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,...
gather(chosen_data) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() # bar plots

# box plots
ggplot(chosen_data, aes(x=high_use, y = G3)) + geom_boxplot()

ggplot(chosen_data, aes(x=high_use, y = absences)) + geom_boxplot()

ggplot(chosen_data, aes(x=high_use, y = health)) + geom_boxplot()

ggplot(chosen_data, aes(x=high_use, y = studytime)) + geom_boxplot()

Distributions:

  • The distribution of school absences is right-skewed: only a few students have more than 10 absences, while most have between 0 and 5.
  • Final grade is almost normally distributed.
  • Health is left-skewed, because most students report very good health (= 5).
  • Low alcohol consumption is clearly more common than high alcohol use (roughly 30% of students are high users).
  • Study time is almost normally distributed.

The effect of alcohol consumption on the selected variables:

  • Final grade is better for low alcohol consumers, as suggested by H1.
  • The number of school absences is higher for those who use more alcohol, as suggested by H2.
  • There is no difference in health between high and low alcohol consumers, contrary to H3.
  • Study time is longer for those with low alcohol consumption, as suggested by H4.
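
These visual impressions can also be checked numerically, for example with group means (a quick sketch using the dplyr functions loaded above):

# compare group means of the chosen variables between alcohol-use groups
chosen_data %>%
  group_by(high_use) %>%
  summarise(mean_G3 = mean(G3), mean_absences = mean(absences),
            mean_health = mean(health), mean_studytime = mean(studytime))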

Using logistic regression to statistically explore the relationship between the chosen variables and alcohol consumption.

m <- glm(high_use ~ G3 + absences + health + studytime, data = chosen_data, family = "binomial")  # find the model with glm()
summary(m) # summary of the model
## 
## Call:
## glm(formula = high_use ~ G3 + absences + health + studytime, 
##     family = "binomial", data = chosen_data)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.2434  -0.8399  -0.6601   1.1619   2.1430  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  0.11336    0.61876   0.183 0.854635    
## G3          -0.05307    0.03602  -1.473 0.140698    
## absences     0.07837    0.02274   3.446 0.000569 ***
## health       0.06701    0.08551   0.784 0.433240    
## studytime   -0.50608    0.15731  -3.217 0.001295 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 430.62  on 377  degrees of freedom
## AIC: 440.62
## 
## Number of Fisher Scoring iterations: 4
OddsRatios <- coef(m) %>% exp
ConfInt <- confint(m) %>% exp
## Waiting for profiling to be done...
cbind(OddsRatios, ConfInt)
##             OddsRatios     2.5 %    97.5 %
## (Intercept)  1.1200365 0.3308837 3.7712228
## G3           0.9483123 0.8833403 1.0177770
## absences     1.0815248 1.0366149 1.1334569
## health       1.0693034 0.9055302 1.2671462
## studytime    0.6028539 0.4387386 0.8140308

Interpretation of the results:

  • The odds ratios for absences (1.08) and health (1.07) are above 1, i.e. they increase the odds of high alcohol use, whereas the odds ratios for final grade and study time are below 1.
  • However, only the coefficients for absences and study time are significant (at p < 0.001 and p < 0.01, respectively).
  • Absences and alcohol use show a positive relationship. This result is in line with H2, which predicted more absences for those who consume more alcohol.
  • The relationship between study time and alcohol use is negative and thus follows the prediction of H4: low alcohol use is associated with longer study time.
  • Logistic regression showed that alcohol use is not related to the final grade (G3) or health.

Refit the model keeping only absences and studytime, which had a statistically significant relationship with alcohol consumption.

m2 <- glm(high_use ~ absences + studytime, data = chosen_data, family = "binomial") # find the model with glm()
summary(m2) 
## 
## Call:
## glm(formula = high_use ~ absences + studytime, family = "binomial", 
##     data = chosen_data)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.2128  -0.8387  -0.7046   1.1996   2.1832  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -0.16640    0.33954  -0.490 0.624087    
## absences     0.08054    0.02285   3.524 0.000425 ***
## studytime   -0.55015    0.15550  -3.538 0.000403 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 433.65  on 379  degrees of freedom
## AIC: 439.65
## 
## Number of Fisher Scoring iterations: 4
# predict() the probability of high_use
probabilities <- predict(m2, type = "response")

# add the predicted probabilities to 'chosen_data'
chosen_data <- mutate(chosen_data, probability = probabilities)
chosen_data <- mutate(chosen_data, prediction = probability > 0.5)

# 2x2 cross tabulation of predictions versus the actual values
t <- table(high_use = chosen_data$high_use, prediction = chosen_data$prediction)
t
##         prediction
## high_use FALSE TRUE
##    FALSE   256   12
##    TRUE     96   18
# The total proportion of inaccurately classified individuals (= the training error)
NumErr <- t[2, 1] + t[1, 2] # false negatives + false positives
Tot <- sum(t) # total number of observations (382)
PropErr <- round(NumErr / Tot, 2)
Accuracy <- 100 - (PropErr * 100)

The model has fairly high predictive power. In total, it classified 72% of individuals correctly as high/low alcohol consumers based on their school absences and study time, which is much better than random guessing (50% accuracy).
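
Note that 72% is a training-set accuracy, so it may be optimistic. A sketch of a 10-fold cross-validation check of the out-of-sample error (assuming the boot package is installed):

# estimate the prediction error with 10-fold cross-validation
library(boot)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5 # TRUE when the classification is wrong
  mean(n_wrong) # proportion of misclassifications
}
cv <- cv.glm(data = chosen_data, cost = loss_func, glmfit = m2, K = 10)
cv$delta[1] # average proportion of misclassifications across the folds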


Week 4 - Clustering and classification

rm(list = ls())  # clear workspace
library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
data("Boston")
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...

Boston data set

This data has 506 observations (rows) and 14 variables (columns). The data includes information about housing values in the suburbs of Boston, such as the crime rate, industry, accessibility to highways, and characteristics of the residents of each area.

Plot the distributions of the variables

gather(Boston) %>% glimpse
## Observations: 7,084
## Variables: 2
## $ key   <chr> "crim", "crim", "crim", "crim", "crim", "crim", "crim", ...
## $ value <dbl> 0.00632, 0.02731, 0.02729, 0.03237, 0.06905, 0.02985, 0....
gather(Boston) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_histogram() # bar plots
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

  • ‘age’, ‘black’ and ‘ptratio’ are left-skewed
  • ‘dis’, ‘lstat’ and ‘nox’ are right-skewed
  • most values of ‘chas’, ‘crim’ and ‘zn’ are zeros
  • ‘indus’, ‘rad’ and ‘tax’ are bimodal
  • ‘medv’ and ‘rm’ are roughly normally distributed

Summarize data

summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08204   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

Correlations between the variables

cor_matrix <- cor(Boston) 
cor_matrix <- round(cor_matrix, digits = 2)
cor_matrix 
##          crim    zn indus  chas   nox    rm   age   dis   rad   tax
## crim     1.00 -0.20  0.41 -0.06  0.42 -0.22  0.35 -0.38  0.63  0.58
## zn      -0.20  1.00 -0.53 -0.04 -0.52  0.31 -0.57  0.66 -0.31 -0.31
## indus    0.41 -0.53  1.00  0.06  0.76 -0.39  0.64 -0.71  0.60  0.72
## chas    -0.06 -0.04  0.06  1.00  0.09  0.09  0.09 -0.10 -0.01 -0.04
## nox      0.42 -0.52  0.76  0.09  1.00 -0.30  0.73 -0.77  0.61  0.67
## rm      -0.22  0.31 -0.39  0.09 -0.30  1.00 -0.24  0.21 -0.21 -0.29
## age      0.35 -0.57  0.64  0.09  0.73 -0.24  1.00 -0.75  0.46  0.51
## dis     -0.38  0.66 -0.71 -0.10 -0.77  0.21 -0.75  1.00 -0.49 -0.53
## rad      0.63 -0.31  0.60 -0.01  0.61 -0.21  0.46 -0.49  1.00  0.91
## tax      0.58 -0.31  0.72 -0.04  0.67 -0.29  0.51 -0.53  0.91  1.00
## ptratio  0.29 -0.39  0.38 -0.12  0.19 -0.36  0.26 -0.23  0.46  0.46
## black   -0.39  0.18 -0.36  0.05 -0.38  0.13 -0.27  0.29 -0.44 -0.44
## lstat    0.46 -0.41  0.60 -0.05  0.59 -0.61  0.60 -0.50  0.49  0.54
## medv    -0.39  0.36 -0.48  0.18 -0.43  0.70 -0.38  0.25 -0.38 -0.47
##         ptratio black lstat  medv
## crim       0.29 -0.39  0.46 -0.39
## zn        -0.39  0.18 -0.41  0.36
## indus      0.38 -0.36  0.60 -0.48
## chas      -0.12  0.05 -0.05  0.18
## nox        0.19 -0.38  0.59 -0.43
## rm        -0.36  0.13 -0.61  0.70
## age        0.26 -0.27  0.60 -0.38
## dis       -0.23  0.29 -0.50  0.25
## rad        0.46 -0.44  0.49 -0.38
## tax        0.46 -0.44  0.54 -0.47
## ptratio    1.00 -0.18  0.37 -0.51
## black     -0.18  1.00 -0.37  0.33
## lstat      0.37 -0.37  1.00 -0.74
## medv      -0.51  0.33 -0.74  1.00
library(corrplot) # the corrplot package is needed for the plot below
corrplot(cor_matrix, method = "color") 

Overall, the variables show strong correlations with each other; therefore only the strongest (r > 0.7 or r < -0.7) are interpreted below.

  • ‘indus’ correlates positively with ‘nox’ (0.76) and ‘tax’ (0.72) and negatively with ‘dis’ (-0.71), suggesting that the proportion of non-retail business acres is associated with higher nitrogen oxides concentration and tax rate and with shorter distances from Boston employment centres.

  • ‘nox’ correlates positively with ‘age’ (0.73) and negatively with ‘dis’ (-0.77), suggesting that the proportion of units built prior to 1940 is associated with higher nitrogen oxides concentration and shorter distance from employment centres.
  • ‘rm’ correlates positively with ‘medv’ (0.7), suggesting that a larger average number of rooms is associated with a higher median value of owner-occupied homes (in $1000s).

  • ‘age’ correlates negatively with ‘dis’ (-0.75), suggesting that the proportion of units built prior to 1940 is associated with shorter distance from employment centres.

  • ‘rad’ correlates with ‘tax’ (0.91): accessibility to highways is highly correlated with tax rate.

  • ‘lstat’ correlates negatively with ‘medv’ (-0.74): a higher percentage of lower-status population (is this term appropriate??) is associated with a lower median value of owner-occupied homes.
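
The strongest pairs can also be picked out programmatically instead of by eye; a small sketch:

# list the variable pairs with |r| > 0.7, excluding the diagonal
strong <- which(abs(cor_matrix) > 0.7 & abs(cor_matrix) < 1, arr.ind = TRUE)
strong <- strong[strong[, 1] < strong[, 2], , drop = FALSE] # keep each pair once
data.frame(var1 = rownames(cor_matrix)[strong[, 1]],
           var2 = colnames(cor_matrix)[strong[, 2]],
           r = cor_matrix[strong])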

Standardize the data

boston_scaled <- scale(Boston)
summary(boston_scaled)
##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865

The data is now standardized to zero mean and unit variance.
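
A quick sanity check that scale() did what we expect:

# each column should now have mean 0 and standard deviation 1
round(colMeans(boston_scaled), 10)
apply(boston_scaled, 2, sd)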

Create a categorical variable of the crime rate

boston_scaled <- as.data.frame(boston_scaled)

bins <- quantile(boston_scaled$crim) # create a quantile vector of crim to use as break points

# create a categorical variable 'crime' 
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
table(crime) # print the table
## crime
##      low  med_low med_high     high 
##      127      126      126      127
boston_scaled <- cbind(boston_scaled, crime) # bind crime to boston_scaled
boston_scaled <- dplyr::select(boston_scaled, -crim) # remove original crim from the dataset

Divide the dataset into train (80%) and test (20%) sets

n <- nrow(boston_scaled) # number of rows
ind <- sample(n,  size = n * 0.8) # choose randomly 80% of the rows
train <- boston_scaled[ind,] # create the train set
test <- boston_scaled[-ind,] # create the test set 
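
Note that sample() draws a different random split on every run. For a reproducible split, the seed can be fixed first; a sketch with an arbitrary seed value:

# reproducible version of the 80/20 split (the seed value is arbitrary)
set.seed(2017)
ind <- sample(n, size = n * 0.8)
train <- boston_scaled[ind, ]
test <- boston_scaled[-ind, ]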

Fitting a linear discriminant analysis (LDA) to the dataset

lda.fit <- lda(crime  ~., data = train) # crime rate is the target variable and all the other variables are predictors 
lda.fit # print the solution
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2500000 0.2326733 0.2549505 0.2623762 
## 
## Group means:
##                   zn      indus        chas        nox          rm
## low       0.90244601 -0.9371897 -0.11640431 -0.8700196  0.46097703
## med_low  -0.09153895 -0.3357516  0.02085925 -0.5793284 -0.12569468
## med_high -0.40231848  0.2100458  0.10991367  0.4090695  0.06498116
## high     -0.48724019  1.0170298 -0.04947434  1.0395904 -0.42017092
##                 age        dis        rad        tax     ptratio
## low      -0.8610394  0.8640828 -0.6941859 -0.7475483 -0.45645807
## med_low  -0.2864485  0.3641405 -0.5469199 -0.4829603 -0.06594514
## med_high  0.3764263 -0.3558484 -0.4054077 -0.2776276 -0.22476388
## high      0.8012524 -0.8442799  1.6390172  1.5146914  0.78181164
##                black       lstat         medv
## low       0.37228329 -0.77340942  0.531470092
## med_low   0.34952340 -0.11381278 -0.001831976
## med_high  0.07191195 -0.00509227  0.119836105
## high     -0.76808629  0.82978163 -0.650920100
## 
## Coefficients of linear discriminants:
##                  LD1          LD2         LD3
## zn       0.122949113  0.712150609 -0.88188706
## indus    0.008284039 -0.517682388  0.24108608
## chas    -0.071289777 -0.045976736  0.23867067
## nox      0.403292848 -0.602953870 -1.44991280
## rm      -0.115324315 -0.094256039 -0.17442791
## age      0.245683563 -0.291068135 -0.01331376
## dis     -0.059939310 -0.303915001  0.19687130
## rad      3.093040929  0.904320304 -0.02657563
## tax      0.098869222  0.065821958  0.51565361
## ptratio  0.170510840  0.016103945 -0.27268599
## black   -0.134954784  0.001957925  0.14234644
## lstat    0.182797839 -0.202861940  0.50921118
## medv     0.177781613 -0.330521115 -0.16892390
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9512 0.0361 0.0127
classes <- as.numeric(train$crime) # target classes as numeric
plot(lda.fit, dimen = 2, col = classes, pch = classes) # plot the lda results

Predict the classes with the LDA model on the test data

correct_classes <- test$crime # save the crime categories from the test set 
test <- dplyr::select(test, -crime) # remove crime from the test dataset
lda.pred <- predict(lda.fit, newdata = test) # predict classes with test data
table(correct = correct_classes, predicted = lda.pred$class) # Cross tabulate the results 
##           predicted
## correct    low med_low med_high high
##   low       17       8        1    0
##   med_low    8      17        7    0
##   med_high   1       8       13    1
##   high       0       0        0   21

Overall, the model performs well in predicting the crime categories. The high crime category is predicted perfectly, and med_high is predicted well except for some confusion with the med_low category. The low and med_low categories are partly mixed with each other.
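
The overall accuracy can be read off the cross-tabulation as the proportion on the diagonal:

# proportion of test observations whose crime category was predicted correctly
conf_tab <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(conf_tab)) / sum(conf_tab)
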
Reload and standardize the Boston dataset and calculate the Euclidean distances between the observations

library(MASS)
data('Boston')
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)
dist_eu <- dist(boston_scaled) # calculate the Euclidean distances

Cluster the dataset with the k-means function and plot the clusters. First, investigate the optimal number of clusters by looking at how the total within-cluster sum of squares (WCSS) changes with the number of clusters.

k_max <- 25 # determine the maximum number of clusters
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss}) # calculate the total within sum of squares
qplot(x = 1:k_max, y = twcss, geom = 'line') # visualize the results

km <- kmeans(boston_scaled, centers = 5) # k-means clustering
pairs(boston_scaled, col = km$cluster) # plot the clusters

Five seems to be an optimal number of clusters, because the total within-cluster sum of squares shows a clear drop around that point.
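
Because kmeans() starts from random centers, the elbow curve can change between runs; a sketch that stabilizes it with a fixed seed and several random restarts (the seed value is arbitrary):

# a more stable elbow curve: fix the seed and use multiple random starts
set.seed(123)
twcss_stable <- sapply(1:k_max, function(k) {
  kmeans(boston_scaled, centers = k, nstart = 10)$tot.withinss
})
qplot(x = 1:k_max, y = twcss_stable, geom = 'line')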

library(MASS)
data('Boston')
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)
# Perform k-means on the re-scaled Boston data with some reasonable number of clusters (> 2)
km_4 <- kmeans(boston_scaled, centers = 4) # k-means clustering
clusters <- as.numeric(km_4$cluster)
lda.fit4 <- lda(clusters  ~., data = boston_scaled) # perform LDA using the clusters  
lda.fit4
## Call:
## lda(clusters ~ ., data = boston_scaled)
## 
## Prior probabilities of groups:
##         1         2         3         4 
## 0.2332016 0.3893281 0.1146245 0.2628458 
## 
## Group means:
##         crim         zn      indus        chas        nox           rm
## 1 -0.4072983  1.3488460 -1.0489403 -0.07213753 -0.9627567  0.922304040
## 2 -0.3871020 -0.3660089 -0.2833876 -0.27232907 -0.3754584 -0.241735612
## 3 -0.1843663 -0.3837437  0.6098682  1.69622105  1.0322548  0.002017225
## 4  1.0151393 -0.4872402  1.0844358 -0.27232907  0.9601489 -0.461104963
##           age        dis        rad         tax    ptratio       black
## 1 -1.10972919  1.0807448 -0.5993733 -0.68968282 -0.7098976  0.35819438
## 2 -0.07883316  0.1232421 -0.5959396 -0.57739980  0.2060048  0.31351282
## 3  0.76897563 -0.7254500 -0.2056659 -0.07209657 -1.1310386 -0.07260367
## 4  0.76599691 -0.8250412  1.5041713  1.49858597  0.8179338 -0.75051090
##         lstat       medv
## 1 -0.94901319  0.9745411
## 2 -0.09369809 -0.1514286
## 3  0.08834853  0.3334956
## 4  0.94223959 -0.7857681
## 
## Coefficients of linear discriminants:
##                  LD1         LD2         LD3
## crim     0.001805375  0.03483553 -0.16424962
## zn      -0.143676694  0.30142374 -1.00100167
## indus    0.529820877 -0.32164482 -0.12787856
## chas    -0.169957839 -1.07643063 -0.22987661
## nox     -0.332426739 -0.88811286 -0.46067691
## rm      -0.084740433  0.33926277 -0.42481554
## age      0.192642764 -0.38943827  0.44112414
## dis     -0.281428628 -0.28386076 -0.28827382
## rad      1.614794433  0.47040915 -0.38354502
## tax      0.857013760  0.07587940 -0.43575564
## ptratio -0.117054277  0.81783999  0.29884435
## black   -0.053671983  0.07950907  0.07459117
## lstat    0.193067402  0.32859673 -0.34012884
## medv    -0.251837637 -0.04374893 -0.56968746
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.6989 0.1789 0.1221
# the function to draw lda biplot arrows 
my_scale <- 3 # myscale multiplies the arrow end points, i.e. it controls the arrow lengths
lda.arrows <- function(x, myscale = my_scale, arrow_heads = 0.1, color = "black", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

plot(lda.fit4, dimen = 2, col = clusters, pch = clusters) # dimen = 2 gives a single panel; with dimen = 3 the pairs-style plot left nowhere for the arrows to be drawn
lda.arrows(lda.fit4, myscale = my_scale)


Week 5 - Dimensionality reduction techniques

rm(list=ls())
setwd("/Users/jsimola/Documents/GitHub/IODS-project/")
human <- read.csv(file="data/human.csv", row.names = 1)

Visualize the variable distributions and their relationships and show summaries

library(GGally); library(corrplot); library(dplyr) # reload the plotting packages in case of a fresh session
ggpairs(human)

cor(human) %>% corrplot(method = "color")

summary(human)
##     eduRatio        workRatio      edu.expectancy  life.expectancy
##  Min.   :0.1717   Min.   :0.1857   Min.   : 5.40   Min.   :49.00  
##  1st Qu.:0.7264   1st Qu.:0.5984   1st Qu.:11.25   1st Qu.:66.30  
##  Median :0.9375   Median :0.7535   Median :13.50   Median :74.20  
##  Mean   :0.8529   Mean   :0.7074   Mean   :13.18   Mean   :71.65  
##  3rd Qu.:0.9968   3rd Qu.:0.8535   3rd Qu.:15.20   3rd Qu.:77.25  
##  Max.   :1.4967   Max.   :1.0380   Max.   :20.20   Max.   :83.50  
##       GNI              MMR              ABR            F.inParl    
##  Min.   :   581   Min.   :   1.0   Min.   :  0.60   Min.   : 0.00  
##  1st Qu.:  4198   1st Qu.:  11.5   1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 12040   Median :  49.0   Median : 33.60   Median :19.30  
##  Mean   : 17628   Mean   : 149.1   Mean   : 47.16   Mean   :20.91  
##  3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :123124   Max.   :1100.0   Max.   :204.80   Max.   :57.50

  • eduRatio, edu.expectancy and F.inParl are almost normally distributed; life.expectancy and workRatio are left-skewed, and GNI, MMR and ABR are right-skewed.
  • The ratio of Female / Male populations with secondary education (eduRatio) correlates positively with the expected years of schooling (edu.expectancy) and with life expectancy at birth, and negatively with maternal mortality ratio (MMR) and with adolescent birth rate (ABR).
  • Expected years of schooling correlates positively with life expectancy at birth and with gross national income (GNI) per capita and negatively with maternal mortality ratio and with the adolescent birth rate.
  • Life expectancy at birth correlates positively with gross national income (GNI) per capita and negatively with maternal mortality ratio and adolescent birth rate.
  • Maternal mortality ratio correlates positively with adolescent birth rate.

Perform principal component analysis (PCA) on the non-standardized data

human_pca <- prcomp(human)
summary(human_pca) # explore the variability captured by the principal components
## Importance of components:
##                              PC1      PC2   PC3   PC4   PC5   PC6    PC7
## Standard deviation     1.854e+04 185.5219 25.19 11.45 3.766 1.566 0.1912
## Proportion of Variance 9.999e-01   0.0001  0.00  0.00 0.000 0.000 0.0000
## Cumulative Proportion  9.999e-01   1.0000  1.00  1.00 1.000 1.000 1.0000
##                           PC8
## Standard deviation     0.1591
## Proportion of Variance 0.0000
## Cumulative Proportion  1.0000
options(warn = -1) # don't want to see the "zero-length arrow is of indeterminate angle and so skipped" warnings
biplot(human_pca, choices = 1:2, cex = c(0.5, 0.8), col = c("grey40", "deeppink2"), xlim=c(-0.5, 0.5), ylim=c(-0.4, 0.2)) # Draw a biplot displaying the observations by the first two principal components

  • PC1 explains 99.99% of the variance
  • PC2 explains about 0.01% of the variance
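
This lopsided result is explained by the raw scales of the variables; a quick check of the column variances shows that GNI dwarfs everything else:

# without standardization PC1 is essentially just the GNI axis,
# because GNI's variance is orders of magnitude larger than the rest
sort(sapply(human, var), decreasing = TRUE)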

Standardize the data, do PCA and add descriptive labels

human_std <- scale(human) # standardize the variables
human_pca_std <- prcomp(human_std) # Do PCA for the standardized data
s <- summary(human_pca_std)
s # show summary
## Importance of components:
##                           PC1    PC2     PC3     PC4     PC5     PC6
## Standard deviation     2.0708 1.1397 0.87505 0.77886 0.66196 0.53631
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595
## Cumulative Proportion  0.5361 0.6984 0.79413 0.86996 0.92473 0.96069
##                            PC7     PC8
## Standard deviation     0.45900 0.32224
## Proportion of Variance 0.02634 0.01298
## Cumulative Proportion  0.98702 1.00000
pca_pr <- round(100 * s$importance[2, ], digits = 2) # percentages of variance captured by each PC
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)") # create axis labels

# draw a biplot
biplot(human_pca_std, cex = c(0.8, 1), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2])

# modify labels 
lab <- c("Maternal mortality & births by adolescents rate  ", "education & life expectancy at birth ")
pc_lab2 <- paste0(lab, "(", pca_pr, "%)") # create labels
biplot(human_pca_std, cex = c(0.8, 1), col = c("grey40", "deeppink2"), xlab = pc_lab2[1], ylab = pc_lab2[2])

The PCA results differ depending on whether the analysis is performed on the non-standardized or the standardized data. In the non-standardized data, PC1 explained almost 100% of the variance (due to the large differences in the standard deviations (SD) of the variables). In the standardized data, the SDs are comparable and the percentages of variance explained by the components are: PC1 - 53.61%, PC2 - 16.24%, PC3 - 9.57%, PC4 - 7.58%, PC5 - 5.48%, PC6 - 3.60%, PC7 - 2.63%, PC8 - 1.30%.

The first principal component (PC) dimension describes the circumstances of women giving birth, i.e., the maternal mortality rate and the age at which women give birth.

The second PC dimension describes the expected years of education and life expectancy at birth.

Multiple Correspondence Analysis (MCA)

library(FactoMineR) # access library

Load and explore the data

data("tea") # load tea dataset
str(tea) # 36 variables, 300 observations based on a questionnaire on tea consumption
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
##  $ frequency       : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...

Select a subset of variables and do MCA

keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch") # column names to keep in the dataset
tea_time <- dplyr::select(tea, one_of(keep_columns))
mca_tea_time <- MCA(tea_time, graph = FALSE) # multiple correspondence analysis
summary(mca_tea_time)
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6
## Variance               0.279   0.261   0.219   0.189   0.177   0.156
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953
##                        Dim.7   Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.144   0.141   0.117   0.087   0.062
## % of var.              7.841   7.705   6.392   4.724   3.385
## Cumulative % of var.  77.794  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898
##                       cos2  v.test     Dim.3     ctr    cos2  v.test  
## black                0.003   0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            0.027   2.867 |   0.433   9.160   0.338  10.053 |
## green                0.107  -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone                0.127  -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                0.035   3.226 |   1.329  14.771   0.218   8.081 |
## milk                 0.020   2.422 |   0.013   0.003   0.000   0.116 |
## other                0.102   5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag              0.161  -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged   0.478  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged           0.141  -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |
plot(mca_tea_time, invisible=c("ind"), habillage = "quali") # visualize MCA

  • Both dimensions 1 and 2 correlate strongly with the variables ‘how’ (0.71 and 0.52, respectively) and ‘where’ (0.71 and 0.68). These variables indicate how the tea is packaged and where it was bought. In addition, dimension 1 correlates weakly with the variable ‘Tea’ (0.13), which indicates the type of tea (e.g., black, green), while dimension 2 correlates weakly with ‘How’, which indicates whether tea is consumed with milk, lemon, etc.
  • The MCA biplot shows that dimension 1 explains 15.24% of the variance and dimension 2 explains 14.23%. Distance in this plot is a measure of dissimilarity; it shows that unpackaged (green) tea bought from a tea shop differs most from the other categories.
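
For completeness, the same plotting function can draw the individuals instead of the category labels (a sketch; the invisible argument controls which elements are hidden):

# complementary view: show the individuals, hide the variable categories
plot(mca_tea_time, invisible = c("var"))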